1,230 research outputs found

    Prioritizing Content of Interest in Multimedia Data Compression

    Get PDF
    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible within a system's limited storage and bandwidth. Many generic image and video compression techniques such as JPEG and H.264/AVC have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In such application-specific systems, where the content of interest in the multimedia data is known and well-defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing the multimedia data's content of interest, new compression methods can be devised that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data, whose definition depends on the application. First, I show that for microscopy videos, the content of interest consists of the spatial regions in the video frame whose pixels contain more than just noise. Keeping the data in those regions at high quality and discarding the rest yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon-based system, practical multimedia data storage and transmission is possible by prioritizing the content of interest. I designed custom image compression techniques that preserve edges in a binary image, or foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon-based augmented reality system that integrates a 3D moving object compression method that prioritizes the content of interest. Doctor of Philosophy
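    As a rough illustration of the region-prioritization idea described in this abstract (a sketch under assumed parameters, not the dissertation's actual codec), the following Python snippet splits a grayscale microscopy frame into blocks, flags blocks whose variance exceeds a noise threshold as content of interest, and encodes the two classes at different JPEG qualities; the block size, threshold, and quality settings are illustrative assumptions.

        # Minimal sketch: prioritize non-noise regions of a microscopy frame.
        # Block size, variance threshold, and JPEG qualities are assumptions for illustration.
        import io
        import numpy as np
        from PIL import Image

        def compress_prioritized(frame, block=16, var_thresh=25.0, q_roi=90, q_bg=20):
            """Encode each block as JPEG, spending quality only where content lives."""
            h, w = frame.shape
            payloads = []
            for y in range(0, h - block + 1, block):
                for x in range(0, w - block + 1, block):
                    tile = frame[y:y + block, x:x + block]
                    is_roi = tile.var() > var_thresh       # noise-only tiles have low variance
                    quality = q_roi if is_roi else q_bg    # prioritize the content of interest
                    buf = io.BytesIO()
                    Image.fromarray(tile).save(buf, format="JPEG", quality=quality)
                    payloads.append(((y, x), is_roi, buf.getvalue()))
            return payloads

        frame = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in frame to exercise the function
        blocks = compress_prioritized(frame)
        print(sum(len(data) for _, _, data in blocks), "bytes total")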

    Manpower planning and cycle-time reduction of a labor-intensive assembly line

    Get PDF
    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 49). The demand for Gas Lift Mandrels (GLM) in the oil and gas industry is expected to increase over the next few years, requiring Schlumberger's GLM assembly line to increase its manufacturing capacity. Given the labor-intensive nature of Schlumberger's GLM assembly line, it is important to consider manpower issues in addition to implementing kaizens and purchasing more equipment. This research analyzes manpower management issues in the GLM assembly line to meet the projected increase in customer demand over the next three years. A detailed time study was conducted to understand and characterize all processes in the assembly line before manpower plans were drawn up for each year through 2013. Several manpower scheduling concepts, such as job rotation and workforce flexibility, were incorporated in the manpower plan to optimize labor utilization, human performance, and well-being. By clustering processes together, the labor utilization rate can be increased to more than ninety percent. A new grinder position has also been proposed to assist with various grinding operations, in order to reduce process cycle times, help workers focus better on their work, and reduce the cost of labor. by Shao Chong Oh. M.Eng.
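    As a back-of-the-envelope illustration of the utilization claim above (the task times, takt time, and operator counts below are hypothetical, not figures from the thesis), labor utilization can be taken as total hands-on process time divided by total paid operator time per cycle, and clustering short processes onto shared operators raises it:

        # Illustrative only: hypothetical process times, not data from the thesis.
        # Labor utilization = sum of hands-on process time / (operators * takt time).
        def utilization(process_minutes, operators, takt_minutes):
            return sum(process_minutes) / (operators * takt_minutes)

        tasks = [12.0, 9.5, 7.0, 14.5, 5.0, 6.0]   # hypothetical station times (min)
        takt = 15.0                                 # hypothetical takt time (min)

        one_operator_per_process = utilization(tasks, operators=len(tasks), takt_minutes=takt)
        clustered = utilization(tasks, operators=4, takt_minutes=takt)  # short tasks share operators

        print(f"one operator per process: {one_operator_per_process:.0%}")  # 60%
        print(f"clustered processes:      {clustered:.0%}")                 # 90%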

    Resummation Prediction on Higgs and Vector Boson Associated Production with a Jet Veto at the LHC

    Get PDF
    We investigate the resummation effects for SM Higgs and vector boson associated production with a jet veto at the LHC in soft-collinear effective theory, using the "collinear anomalous" formalism. We calculate the jet-vetoed invariant mass distribution and cross section for this process at next-to-next-to-leading-logarithmic (NNLL) accuracy, matched to the QCD next-to-leading order (NLO) results, and compare the resummation effects for different values of the jet veto $p_{T}^{\rm veto}$ and jet radius $R$. Our results show that both the resummation enhancement and the scale uncertainties decrease with increasing jet veto $p_{T}^{\rm veto}$ and jet radius $R$. For $p_{T}^{\rm veto}=25$ GeV and $R=0.4~(0.5)$, the resummation effects reduce the scale uncertainties of the NLO jet-vetoed cross sections to about $7\%~(6\%)$, which increases confidence in the theoretical predictions. In addition, after including resummation effects, the PDF uncertainties of the jet-vetoed cross section are about $7\%$. Comment: 22 pages, 10 figures and 2 tables; final version in JHEP
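    Schematically, the matching of the resummed and fixed-order results mentioned above follows the standard additive pattern (a generic sketch in LaTeX, not necessarily the authors' exact formula):

        % Generic NNLL + NLO matching (schematic)
        \sigma^{\rm matched}(p_T^{\rm veto})
          = \sigma^{\rm NNLL}(p_T^{\rm veto})
          + \Big[\, \sigma^{\rm NLO}(p_T^{\rm veto})
          - \sigma^{\rm NNLL}(p_T^{\rm veto})\big|_{\rm expanded\ to\ NLO} \Big] ,

    where the subtraction removes the double-counted terms, so the prediction reduces to fixed order where logarithms of $M/p_T^{\rm veto}$ are small and to the resummed result where they dominate.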

    Modeling of Turbulent Sooting Flames

    Full text link
    Modeling multiphase particles in a turbulent fluid environment is a challenging task. To describe the size distribution accurately, a large number of scalars must be transported at each time step. Adding the heat release and species mass fraction changes from nonlinear combustion chemistry yields a tightly coupled set of equations that describe the (i) turbulence, (ii) chemistry, and (iii) soot particle interactions (physical agglomeration and surface chemistry reactions). Uncertainty in any one of these models can introduce errors of up to a few orders of magnitude in predicted soot quantities. The objective of this thesis is to investigate the effect of turbulence and chemistry on soot evolution with respect to different soot aerosol models and to develop accurate models for simulating soot evolution in aircraft combustors. To investigate the effect of small-scale turbulence time scales on soot evolution, a partially stirred reactor (PaSR) configuration is coupled with soot models ranging from semi-empirical to detailed statistical models. Differences in predicted soot properties, including soot particle diameter and number density, among the soot models are highlighted. The soot models are then used to simulate the turbulent sooting flame in an aircraft swirl combustor to determine the large-scale soot-turbulence-chemistry interactions. Highlights of this study include differences in the location of bulk soot mass production in the combustor with different soot models. A realistic aircraft combustor operating condition is simulated using a state-of-the-art minimally dissipative turbulent combustion solver and the soot method of moments to investigate pressure scaling and soot evolution at different operating conditions. A separate hydrodynamic scaling is introduced alongside the pressure scaling, in addition to the thermochemical scaling from previous studies. Finally, a Fourier analysis of soot evolution in the combustor is discussed. A low sooting-frequency mode is found in the combustor, separate from the dominant fluid flow frequency mode, which could affect statistical data collection for soot properties in turbulent sooting flame simulations. PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147513/1/stchong_1.pd
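    For readers unfamiliar with the soot method of moments used above, the transported quantities are moments of the particle size distribution; in its generic textbook form (not a statement of this thesis's particular closure),

        M_r = \int_0^{\infty} v^{r}\, n(v)\, \mathrm{d}v \;\approx\; \sum_i v_i^{\,r} N_i ,

    where $n(v)$ is the number density of soot particles of volume $v$: $M_0$ gives the total particle number density and $M_1$ the soot volume fraction, so a handful of moments is transported instead of the full size distribution.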

    Search for the signal of monotop production at the early LHC

    Full text link
    We investigate the potential of the early LHC to discover the signal of monotops, which can be decay products of resonances in models such as R-parity violating SUSY or SU(5). We show how to constrain the parameter space of the models using the present data on the $Z$ boson hadronic decay branching ratio, $K^0$-$\bar{K}^0$ mixing, and dijet production at the LHC. We then study in detail the various cuts imposed on the events, reconstructed from the hadronic final states, to suppress backgrounds and increase the significance, and we find that in the hadronic mode the information from the missing transverse energy and reconstructed resonance mass distributions can be used to determine the masses of the resonance and the missing particle. Finally, we study the sensitivities to the parameters at the LHC with $\sqrt{s}=7$ TeV and an integrated luminosity of $1~{\rm fb}^{-1}$. Our results show that the early LHC may detect this signal at the $5\sigma$ level in some regions of the parameter space allowed by the current data. Comment: 25 pages, 18 figures, 3 tables, version published in Phys.Rev.
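    As a rough guide to what a $5\sigma$ discovery with $1~{\rm fb}^{-1}$ requires (a standard counting estimate, not the detailed analysis performed in the paper), with signal and background cross sections $\sigma_S$, $\sigma_B$ and selection efficiencies $\epsilon_S$, $\epsilon_B$ after cuts,

        Z \simeq \frac{S}{\sqrt{B}}
          = \frac{\epsilon_S\,\sigma_S\,\mathcal{L}}{\sqrt{\epsilon_B\,\sigma_B\,\mathcal{L}}} ,
        \qquad
        \mathcal{L}_{5\sigma} \simeq \frac{25\,\epsilon_B\,\sigma_B}{(\epsilon_S\,\sigma_S)^{2}} ,

    so tighter cuts trade signal efficiency for background rejection in order to minimize the luminosity needed for discovery.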

    Threshold Resummation for WZ and ZZ Pair Production at the LHC

    Full text link
    We perform the threshold resummation for WZ and ZZ pair production at next-to-next-to-leading logarithmic accuracy in soft-collinear effective theory at the LHC. Our results show that the resummation effects increase the total cross sections by about 7% for ZZ production and 12% for WZ production at $\sqrt{S}=7$, 8, 13 and 14 TeV, and that the scale uncertainties are significantly reduced. In addition, our numerical results are in good agreement with the experimental data reported by the ATLAS and CMS collaborations. Comment: 14 pages, 8 figures, 2 tables, version published in Phys.Rev.

    Soft gluon resummation in the signal-background interference process of $gg(\to h^*) \to ZZ$

    Get PDF
    We present a precise theoretical prediction for the signal-background interference process $gg(\to h^*) \to ZZ$, which is useful for constraining the Higgs boson decay width and for measuring the Higgs couplings to the SM particles. The approximate NNLO $K$-factor is in the range $2.05-2.45$ ($1.85-2.25$), depending on $M_{ZZ}$, at the 8 (13) TeV LHC, and the soft gluon resummation increases the approximate NNLO result by about $10\%$ at both the 8 TeV and 13 TeV LHC. The theoretical uncertainties, including the scale, the uncalculated multi-loop amplitudes of the background, and PDF$+\alpha_s$, are roughly $\mathcal{O}(10\%)$ at ${\rm NNLL'}$. We also confirm that the approximate $K$-factors in the interference and the pure signal processes are the same. Comment: 18 pages, 9 figures; v2 published in JHEP
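    For context on the $K$-factors quoted above, the usual convention (a generic definition; the paper's exact normalization may differ) is the ratio of the higher-order to the leading-order prediction evaluated with the same setup,

        K^{\rm NNLO}(M_{ZZ}) = \frac{\mathrm{d}\sigma^{\rm NNLO}/\mathrm{d}M_{ZZ}}{\mathrm{d}\sigma^{\rm LO}/\mathrm{d}M_{ZZ}} ,

    so a $K$-factor near 2 means the higher-order corrections roughly double the leading-order interference contribution.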

    Searching for the signal of dark matter and photon associated production at the LHC beyond leading order

    Full text link
    We study the signal of dark matter and photon associated production induced by the vector and axial-vector operators at the LHC, including the QCD next-to-leading order (NLO) effects. We find that the QCD NLO corrections reduce the dependence of the total cross sections on the factorization and renormalization scales, and that the $K$ factors increase with the dark matter mass and can be as large as about 1.3 for both the vector and axial-vector operators. Using our QCD NLO results, we improve the constraints on the new physics scale from the recent CMS results. Moreover, we show Monte Carlo simulation results for detecting the $\gamma+\Slash{E}_{T}$ signal at the QCD NLO level, and present the integrated luminosity needed for a $5\sigma$ discovery at the 14 TeV LHC. If the signal is not observed, a lower limit on the new physics scale can be set. Comment: 19 pages, 18 figures, 2 tables, version published in Phys.Rev.
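    The vector and axial-vector operators referred to above are, in the effective-operator notation commonly used for collider dark matter searches (a generic form; the paper's coefficients and normalization of $\Lambda$ may differ),

        \mathcal{O}_V = \frac{(\bar{\chi}\gamma^{\mu}\chi)(\bar{q}\gamma_{\mu}q)}{\Lambda^{2}} ,
        \qquad
        \mathcal{O}_A = \frac{(\bar{\chi}\gamma^{\mu}\gamma^{5}\chi)(\bar{q}\gamma_{\mu}\gamma^{5}q)}{\Lambda^{2}} ,

    where $\chi$ is the dark matter field, $q$ a quark field, and $\Lambda$ the new physics scale constrained by the mono-photon search.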

    Phenomenology of an Extended Higgs Portal Inflation Model after Planck 2013

    Full text link
    We consider an extended inflation model in the framework of the Higgs portal model, assuming a nonminimal coupling of the scalar field to gravity. Using the new data from Planck 2013 and other relevant astrophysical data, we obtain the relation between the nonminimal coupling $\xi$ and the self-coupling $\lambda$ needed to drive inflation, and find that this inflationary model is favored by the astrophysical data. Furthermore, we discuss the constraints on the model parameters from particle physics experiments, especially the recent Higgs data at the LHC. Comment: 21 pages, 8 figures; version published in EPJ
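    For orientation on the $\xi$-$\lambda$ relation mentioned above: the nonminimal coupling enters the Jordan-frame action through a term $\tfrac{1}{2}\xi\phi^{2}R$, and in the large-$\xi$ limit the observed amplitude of scalar perturbations ties the two couplings together roughly as in standard Higgs-type inflation (a schematic relation for $N\simeq 60$ e-folds, not the paper's fitted result),

        A_s \sim \frac{\lambda\,N^{2}}{72\pi^{2}\,\xi^{2}} \approx 2\times 10^{-9}
        \quad\Longrightarrow\quad
        \xi \sim \mathcal{O}(10^{4})\,\sqrt{\lambda}\, ,

    so larger self-couplings require proportionally larger nonminimal couplings to reproduce the measured power spectrum.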